Meeting

The Rules of AI: Governing Technology in a Geopolitical Age

Friday, November 21, 2025
Speakers

Laura DeNardis, Professor and Endowed Chair in Technology, Ethics, and Society; Director, Center for Digital Ethics, Georgetown University; CFR Member

Vinh Nguyen, Senior Fellow for AI, Council on Foreign Relations

Miriam Vogel, Author, Governing the Machine: How to Navigate the Risks of AI and Unlock Its True Potential; President and Chief Executive Officer, EqualAI; CFR Member

Presider

Maryam Mujica, Chief Public Policy Officer, General Catalyst

Panelists discuss the geopolitical implications of transformative technologies like artificial intelligence, including how decision-makers are navigating governance, balancing innovation with risk, and addressing questions of equity and accountability.

MUJICA: Welcome to today’s meeting, titled “The Rules of AI: Governing Technology in a Geopolitical Age.” I am Maryam Mujica, chief public policy officer at General Catalyst.

And we are joined today by CFR participants and members attending in person in Washington and virtually over Zoom. We’re going to kick off the discussion and then we’ll open it up for Q&A halfway through the hour. And we’ll take questions both from the room and from virtual participants online.

So thank you so much for joining us. We have a wonderful panel joining us today, true experts in this field, with both technology expertise and geopolitical expertise. So we’re going to have a great discussion.

First, to my right is Laura DeNardis, who is a professor and endowed chair in technology, ethics, and society; the director of the Center for Digital Ethics at Georgetown University; and a CFR member. And I urge you all to look at the panelists’ more extensive bios online because they’re quite impressive.

And then we have Vinh Nguyen, who is a senior fellow for artificial intelligence here at the Council on Foreign Relations. He just joined the Council, and prior to coming here he spent over two decades at the National Security Agency, most recently serving as its chief AI officer. So thrilled to have you here too.

And Miriam Vogel, who I’ve long known and I’m so delighted to have her on this panel, the author of Governing the Machine. And she is the president and chief executive officer of EqualAI, and also a CFR member.

Thank you again for making the time today, and thank you all for joining us.

So I would love to kick off the discussion with Laura. You know, when people think about AI, they think about, you know, ChatGPT or Claude. And actually, just for everyone to remember, it was just about three years ago, on November 30, 2022, that ChatGPT was released, so we’re at the three-year mark. But what are we actually talking about when we discuss AI more broadly? Because the term is just thrown around and people make assumptions that we’re talking about one thing or another.

DENARDIS: Right. Well, AI is a very general term for technologies that perform tasks that can otherwise be performed by human intelligence, but it’s actually an extremely diverse area. ChatGPT and large language models are just one area of that. But even if we were only talking about large language models, even that is a really diverse area of technology. It involves the chipset, you know, the compute part of it; the data infrastructure, the cloud computing infrastructure; and then the large language models themselves. There are so many layers upon layers of technology that keep everything running. And of course, cybersecurity around AI as well. But beyond that, there’s AI that can see, like reading images. There’s AI that can speak and hear, like the kinds of interactive voice technologies that we use. So when you think about it, it’s many different layers, many different things, and so it’s a diverse area of technology.

MUJICA: This is something that we frequently talk about, the AI stack and, like you said, the different layers that go into it. So I think that’s an important thing to bring to the fore broadly.

And so, speaking of all of this that we’re talking about, AI, there’s a governance issue, which is the central theme of our conversation today. And so I wanted to ask Vinh: what characteristics of this broad technology that we’re talking about make it difficult to govern?

NGUYEN: Yeah. So Laura mentioned the AI stack, and it’s quite comprehensive. Where the capabilities really lie is in the models and the ecosystem, and how you interact with the AI capabilities.

When you really look at these technologies, there are really about four unique characteristics that, combined together, make it hard, right? So the first one is that the technology is probabilistic. For a long time as a computer scientist I knew that I could use math to anchor the output and be happy with it. But we cannot really do that here; it’s probabilistic, meaning you have to manage it.

The second is that it is quite opaque. We truly don’t understand the latest advances; frontier AI capabilities are quite opaque. We don’t have the science yet to really understand how it works completely. We know some parts, but not completely. So it’s still a frontier of science.

The third is that it has a lot of feedback, you know, bouncing back and forth between users and machines. And so adding that feedback is creating this very complex, dynamic system.

And then, lastly, you provide that with autonomy.

And so, you know, throughout human societies, we have dealt with technologies that are probabilistic; technologies that have feedback loops, like irrigation, for example; technologies that are autonomous; and technologies that we don’t understand. But we haven’t really governed technologies with all of these characteristics combined together. And so that makes it unique in different ways, but it’s not impossible. We thought every technology was impossible to govern until we learned how to govern it. And so we are really at the start of the journey on how to govern technologies that have these unique characteristics.
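
To make the probabilistic point concrete, here is a toy sketch in Python. It is not any real model or a method discussed on the panel, just a hypothetical sampler showing that the identical input can yield different outputs on each run, which is why governance has to manage a distribution of behavior rather than a single deterministic answer.

```python
# Toy illustration of probabilistic output (hypothetical sampler, not a real model).
import random
from collections import Counter

def toy_model(prompt, temperature=1.0):
    # Picks an answer with probabilities that flatten as temperature rises,
    # loosely mimicking how sampling works in a generative model.
    options = {"safe answer": 0.70, "borderline answer": 0.25, "bad answer": 0.05}
    weights = [p ** (1.0 / temperature) for p in options.values()]
    return random.choices(list(options), weights=weights, k=1)[0]

if __name__ == "__main__":
    random.seed(1)
    outcomes = Counter(toy_model("the same prompt every time") for _ in range(1000))
    # Identical input, a spread of outputs: the spread is what gets governed.
    print(outcomes)
```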

MUJICA: And speaking of governance, I wanted to move over to Miriam. Governance has a long history in other domains, such as internet governance. Given the limitations of what we do know and what we don’t know yet, to Vinh’s point, what do you think are the most urgent issues that we should address given what we do know now? And I know that in your book that’s sort of a theme, weighing the benefits against the risks. So where do you see the most urgency?

VOGEL: Well, thank you for that question, Maryam. And I’m really glad we’re talking about parsing it out.

We do have frameworks in place. We have precedent. We’re not going at this blind. We have the expertise of understanding how this works big picture and we do have frameworks from cyber, from finance, from health care. We have been using AI and we have been using automated systems in those domains, so we have something to work with.

What’s new here that we need to be very mindful of? Many things, but I would say three in particular.

One is the speed and the scale at which AI operates. That presents some unique challenges. For instance, monitoring. You know, most companies do not yet have governance systems in place. That means they don’t have a framework to test—to monitor. They don’t have clear processes in place. And the pace at which they need to be monitoring has also been significantly increased, given the speed of this technology.

So that makes it harder to find the edge cases. It makes it harder to put compliance in place. It makes it harder to have accountability. We have fractured accountability. Who, within an organization, should own it? That’s a hard question. Is it the chief data officer, who has the technical expertise, or the general counsel, who has the governance expertise? But it’s not a question we can pause and admire; we need to make decisions quickly. Another challenge from the speed is our lack of stakeholder input. A unique element of AI is that it iterates and learns new patterns as it goes. And so it becomes paramount that any potential user is taken into account when we’re thinking about the safety and applications of this technology. And so, given the speed at which we’re operating, it’s been hard to make sure we’re getting that relevant real-time input.

Second, the opacity of what we’re talking about here, with the data and the models. We don’t necessarily know the purpose for which it was designed. We haven’t necessarily seen the data that it was trained on. We don’t know the populations for whom it was imagined. And thus we don’t really know for whom it could fail. That makes it hard for us to put the appropriate documentation, testing, and monitoring in place. It also makes it hard, as we talked about with ownership within an organization, to pinpoint who the designated point person should be across industry. It’s hard to know who owns responsibility for this. Across governments we’re struggling to figure that out too. And so these real-time questions are compounding our fractured accountability, in our own homes, in our own organizations, and in governments writ large.

The third challenge I think we need to address is that our education has not kept pace with the innovation. So AI is everywhere, but most people don’t realize how many times today alone they’ve used it. And that means they won’t know how to use it safely. It also means they don’t know how to use it to create more opportunity for themselves. Littler did a really interesting study last year where they surveyed global companies and asked how many were using AI in their HR systems, because that’s one of the most predominant areas of AI use, as you know. About 82 percent of the HR leads reported using it in their systems, but only about 69 percent of the CEOs of those companies, and only 48 percent of the general counsels, knew that they were using AI in those systems. And so we need to understand where we’re using it.

We need to understand what it is, thank you for the helpful definition, so that leaders can be acting with knowledge and so that organizations can adopt frameworks based on their actual use and expectations for projected use. But again, getting back to the first point, good news: this is addressable. Not just through the playbook that my co-authors and I created in the book, Governing the Machine, but there are governance frameworks. And there are a lot more examples now of how to manage these new as well as known governance questions.

MUJICA: I also picked up a statistic in your book that of all the companies using it, which was a very high number, only a third have controls in place. So drastically different numbers in terms of use versus controls to make sure that there are guardrails.

I’d love to shift to the geopolitical side, which will come back to the governance side. But, Laura, you wrote a book recently, Geopolitics at the Internet’s Core. And recently, we saw some news about state-sponsored hacking that Anthropic disclosed, which they had noticed through the use of their tools. What do these recent state-linked, AI-enabled hacking incidents reveal about the future of cybersecurity? And how should we prepare for a world where AI is used in both attack and defense?

DENARDIS: Yeah. There’s definitely a geopolitical context, there’s no question about that. You have the Chinese approach of being very state centric and trying to control content; it’s very top down. You have, obviously, the American approach of less regulation, more free market. Then you have the EU regulating everything. This is part of the geopolitical context. Within that framing, it’s almost self-evident what some of the geopolitical dynamics are. There’s the great-power competition over AI leadership. There’s the strategic issue of the distribution of the data centers. There’s also the national security context. But I wanted to say something that’s a little bit different. And that’s that there really is a tension in the global sphere between multistakeholder governance and what you could call either sovereignty approaches or, at most, multilateral approaches.

So in the international discussions, everything is coming up multilateral. Actually, Marietje Schaake had an interesting piece in Foreign Affairs about that. And so you can see the question: do you start with international agreements, or maybe you just start with national regulation? On your cybersecurity question, the Anthropic case is not the only example; there was also the Doppelgänger campaign. But the Anthropic hack, as you would call it, really says a lot about how security defense and security offense will play out. One is that the hack, which they attributed to Chinese state actors, bypassed the guardrails that Anthropic had in place. And it did it by using very, very small commands, small tasks; I actually try to have my students do things like this, but on a legal basis. When you put those small pieces together, it can be very powerful. That’s one of the things that it did.

And the other feature of it, getting back to the attributes and the technological affordances of AI, is that the attack itself happened with very limited human intervention. I believe Anthropic said it was 80 to 90 percent agentic AI and autonomous technology. So that speaks to where cybersecurity is playing out. One more point about that, just reversing it a little bit: it’s now impossible to imagine cybersecurity without AI. It’s embedded in everything. It’s used for pattern detection. It’s figuring out when distributed denial of service attacks occur. In fact, it would be easier to find the areas of cybersecurity that are not embedded with AI, maybe some encryption, than those that are. So there’s a reciprocal relationship between the two. And it’s playing out in geopolitics in every way.

MUJICA: So true. And speaking—

NGUYEN: Can I add—

MUJICA: Oh.

NGUYEN: Can I add to what was said? You know, when you think about this, the adversaries, in this case China, did not really hack the system. They actually manipulated and misused it for their own purposes, right? Which is a different problem. And that’s really the same underlying problem: increasing the reliable use of AI capabilities has to be balanced against how you manage misuse as well. And so this is a very tricky frontier in cybersecurity. My take on this is that what the Chinese actors have demonstrated is that they are willing to actually prototype, run an operational pilot, and then actually deploy it. And we cannot have the first example of a successful, effective agentic AI workflow attributed to a Chinese intelligence service, right? And so that’s not good.

And on the other side, the defense: I think there are AI-powered cybersecurity capabilities, but the defenders are still hesitant to use them, because they’re not secure yet. You don’t want a bodyguard who will betray you. And so there’s a whole question of how we secure our own defenses before we make our defenses autonomous. And I think that is where the lag between the offense and the defense is at this point.

MUJICA: Which is something I always remind people of, having worked in the tech sector for over a decade: it’s like everything else in life. It’s a tool that can be used for good or bad purposes. And we’re seeing this in real time with these new models. And we’ve seen it since the start of the internet. So I think that’s very much something we need to remember about how it’s being used on both sides of the equation. But given all the things that we’re saying about innovation and responsibility, going back to this, Miriam, I just wanted to ask you: we don’t want to slow down innovation, and you’ve said that we need to speed up our responsibility and our approach to how we manage, you could say, these new tools. So what does that look like to you?

VOGEL: I think it’s a really important point, because the whole goal here is not to slow down innovation and progress. Nobody wants that. It is to make sure that we have organizational readiness that matches our technological progress. Because without that you have liability. Without that you don’t have adoption. We’re seeing all the headlines now about the lack of return on investment that companies are experiencing. I firmly believe that it’s because we don’t have enough governance in place. In the companies and the organizations where there is clear governance, people trust the system. They trust that what they’re using is safe. It will not get them in trouble. It will not harm them or their customers or their families. They know when and how to use it safely. That’s the big picture of governance. That brings adoption. That brings productivity and returns.

So what does that look like in real time? It means that you cannot proceed without ownership. The organizations that don’t have C-suite accountability will be very hamstrung as they proceed. You really need that leadership at the top to own your AI. It’s not something to hand off to IT to solve on their own. Because you’re going to need that oversight, that budget reinforcement. You’re going to need that purview across the enterprise, because AI is going to touch all the different departments. And you need someone who makes sure that there’s visibility overall. Which brings me to visibility. You cannot audit your AI footprint if you don’t know where you’re using it. So you really need to be surveying your systems so you know what it is that you’re trying to be accountable for.
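
As one illustration of that visibility step, here is a minimal sketch of what an AI-use inventory record might look like; the fields and risk tiers are illustrative assumptions, not EqualAI’s format or any formal standard.

```python
# Hypothetical AI-use inventory record: you cannot audit what you have not catalogued.
from dataclasses import dataclass, asdict

@dataclass
class AISystemRecord:
    name: str               # e.g., "HR resume screener"
    business_owner: str     # an accountable executive, not just IT
    purpose: str
    data_sources: list
    vendor_or_in_house: str
    risk_tier: str          # e.g., "high" if it touches hiring, credit, or health

inventory = [
    AISystemRecord(
        name="HR resume screener",
        business_owner="Chief People Officer",
        purpose="Rank inbound job applications",
        data_sources=["applicant tracking system"],
        vendor_or_in_house="vendor",
        risk_tier="high",
    ),
]

# Anything rated high-risk gets testing, monitoring, and a named point person.
for record in inventory:
    if record.risk_tier == "high":
        print("Needs governance review:", asdict(record))
```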

The companies that we work with at EqualAI are laser-focused on operationalization. And that’s the key here. Starting with your AI principles is good. And there are frameworks out there that are very helpful in this process, the NIST AI Risk Management Framework and so forth. But it can’t stop there. It cannot live as a PDF. It has to be operationalized and revisited across the enterprise. You need to have education and literacy. You cannot proceed by putting an app or a browser link on your enterprise homepage and thinking people will know how to use it, or how they should be using it safely. You need to invite your employees and your consumers on this journey with you. It’s a great way to increase your productivity, but it also ensures that they’re actually using it, knowing what your expectations are, knowing what the limits are, and knowing how not to be using it.

So it’s thinking about this like any other innovation. Again, we can put this in the precedent of history. None of us would step foot in a car if we didn’t think that there were standards and universal safeguards in place—brakes, air bags, a safety protocol for odometers. That’s what you need to do with AI, so that people will get in the system with you, use it, and help you benefit from it.

MUJICA: You’ve touched upon governance, and you mentioned companies in this answer and in a previous answer. I’d love to ask Vinh about the AI governance oxymoron: who is doing the governing, and what does the privatization of governance mean for accountability?

NGUYEN: Yeah, thank you. And I’d love to hear from my colleagues as well. This is a very interesting topic, right? It’s quite dynamic. Basically, who are the new governors of AI? And right now, I would say there are about five that we don’t really talk about; I mean, some are obvious, but we don’t really think about whether they’re truly the governors, right? So, of course, the obvious one is the frontier labs. They’re making decisions every single day, trying to develop these technologies, and they’re going to build them; it’s part of the competition. Then you have the hyperscalers. For those who don’t know, these are the Amazon Web Services, the Google Clouds, the Microsoft Azures, you know, Oracle, IBM. These are the people who are actually running your virtual infrastructure and digital services. They are running AI on top of their platforms. And so they are also governors.

The third group would be the enterprises, the employers, the people who I’m sure Miriam works pretty closely with on deployment. They are making decisions. If they’re doing scalable deployments, available to millions and billions of people, they are making decisions about how to do this responsibly. And I would say the consumers have a voice too, but we don’t use it, right? Say you’re using ChatGPT, or Gemini, or Claude: there’s always a button, I’m not sure how many people use them, either thumbs-up or thumbs-down, which is not that useful. However, there’s a report function. (Laughs.) And you can put it in there and say it is not doing what I’m asking. But people don’t use them. So I think there is a voice for consumers as well, but few people are actually exercising it.

I would say the last group that we don’t really talk about are the cross-sector, cross-industry, and civil society groups coming together to engineer security assurance at the lowest level. We tend to see them as engineering groups. The Coalition for Secure AI is a good example, and the Cloud Security Alliance. They build frameworks. They build technologies right into the tooling, working alongside industry. And those get embedded into the governance system. And so we have to think creatively about who the new governors are and how we actually empower and encourage and shape how we want this AI future to be.

VOGEL: I couldn’t agree with Vinh more that this is a team sport. We absolutely all have a role to play in governance. That’s the good and bad news. We can’t point fingers at someone else to govern it for us. There are certainly pieces that we need the large-scale deployers, the frontier models, and governments to do. But as a consumer, as an executive, there is no doubt we all have a role to play. The other piece, I think, that needs to be talked about: are there any lawyers in the room? (Laughter.) Yeah, good news. We’re looking at the statehouses, but we need to be looking at the courthouses. That’s where most of this will play out.

We have three chapters in the book dedicated to looking at laws on the books, which is a little precarious when you’re talking about AI and policy because it changed with every single revision of the book. But we do have frameworks in place. The baseline of what we wrote in the book doesn’t change, where you’re talking about torts, where you’re talking about contracts, all different kinds of product liability. Will this continue to be viewed as a service, or will it be a product? These are the questions that will be decided based on existing frameworks in our U.S. courts and courts across the globe. And so, good news, you’re not going out of business. (Laughs.)

DENARDIS: Yeah, I’ll even add, I’ll heap on top of that even. So we have the digital mediation of the public sphere in every way. And we have the privatization of the conditions of civil liberties and national security within that, in many ways. And, you know, another entire class of actors is really in the technical infrastructure that is beneath the level of AI. So imagine a day in AI without satellite infrastructure. It’s very difficult to do that. Think about the network operators, about internet exchange points. It runs over the internet. And there are a number of chokepoints for control, places where things can be moderated, places where things can be disrupted. And a lot of this critical privatization and governance takes place in that deeper infrastructure, too.

MUJICA: So true. Well, we’re speaking about all the different actors. I’d love to take this, again, to the different actors on the global scale. So you have countries becoming more protectionist, more inward looking. You have Europe trying to regulate; they’ve traditionally taken sort of a lead role in regulating in the tech space. The U.S. is very focused on innovating. And China is in the wings. And we see local laws proliferating. I think we’ve had over 1,200 bills introduced at the state and local levels, many of which have been passed across the states. How do you see this playing out, Laura?

DENARDIS: How is that playing out? Well, a few things. It is really interesting; it’s very much like what has unfolded with the internet, where there is a resurgence of cyber sovereignty models, AI sovereignty models. And I think we’ll continue to see that. We’ll see it at the level of even physical infrastructure, the rare earth metals. We’ll see it in how data infrastructure is distributed and in data localization laws around that. It has been really interesting for me to see the amount of sovereignty talk, not only in how things are actually playing out in the real world but even in the way that academics describe it, the books that people are writing, and the discussions in the international sphere. I don’t want to say it is reductionist, but it’s not matching how things actually work, right?

So we have this surge of cyber sovereignty, of AI sovereignty. And I think we’ll continue to see that. And, since you asked me to predict, we’ll also see an increasing co-opting of the infrastructure around that for different kinds of political means and political objectives that have nothing to do with the way the technology is actually being used. The same thing happened with the internet, where the domain name system, for example, was co-opted for censorship and for intellectual property rights enforcement; we will see the co-opting of AI infrastructure for different kinds of geopolitical means and objectives on the international stage.

MUJICA: It will surely stay interesting.

We are at the 12:30 mark, so at this time I would like to invite participants here in Washington and online to join our conversation with their questions. And just a reminder that this meeting is on the record. Do we have some questions?

Q: Hi. Thank you for a fascinating conversation. Munish Walther-Puri, TPO Group.

I wanted to push on the governors or stakeholders idea. I noticed you didn’t mention investors, and I was curious where you see them fitting in. And then I would love to hear from the panel about this recent EO; the full draft was released by the administration, about an AI Litigation Task Force to go after states that are trying to put rules and governance around AI at the state level. Thank you.

VOGEL: I’m glad you raised investors. I really want to underscore how important that is. I mean, when we say everyone has a role, we really mean it. And they certainly have a larger thumb on the scale than others. Investors, and boards as well, another important audience that we should be mentioning, have the power to ask questions that can really move the needle. And it’s really in your best interest. If you’re an investor and you’re not asking questions about AI governance, you’re taking on tremendous potential liability, in addition to the return-on-investment point we mentioned. You know, McKinsey had a 2025 poll on AI that indicated that one of the highest indicators of a high return from generative AI is CEO adoption of AI governance. So for a variety of reasons it is in your best interest to make sure that you’re getting your return on the investment, that you’re scaling it, that the employees and consumers trust it, that you’re not ending up on the front page of the Wall Street Journal, and that you’re not ending up in the courts.

MUJICA: And please state your name and affiliation before your question.

Q: Hi. My name is Soribel. Former DHS; now I have my own consultancy.

My question is, I guess, to Miriam. You said operationalizing AI governance is where things should be at this point right now. What I’m seeing in my own practice is a bit of a reluctance to invest in operationalizing AI governance. And I want to go down to the basic level: you need people, right? You need people in all functions to start operationalizing that. But I see a reluctance to invest in that. Do you see that as well in your work with companies and CEOs?

VOGEL: I mean, as Maryam mentioned, we’re seeing across the scale that 80 to 90 percent of companies report AI use in some surveys, while as few as 11 percent in some surveys report having AI controls or a framework in place. So that’s clearly what’s happening. I’d be curious to hear if they’re telling you why. Is it investment? Is it lack of education? Is it lack of clarity on what it means to govern? It looks like you don’t have the microphone, but we’d love to hear more about why. We have the privilege with EqualAI of working with companies that have self-selected. They are working with us because they want to operationalize their AI governance and they want to talk with other companies that are deeply invested in this space. So day to day we have the privilege of not having that conversation. But when we’re in broader audiences, of course, that is not representative.

Q: Hi. Nelson Cunningham, formerly of the State Department, but before that, McLarty Associates.

The other day I read a fascinating book called If Anyone Builds It, We All Die (sic; If Anyone Builds It, Everyone Dies). And I hosted the author for a book talk. Miriam, you were there. Maryam, you were not. (Laughter.) I’m sorry you weren’t. And the thesis of the author, Nate Soares, is that if AI becomes superintelligent, which he defines as sort of roughly human-level intelligence, it won’t need human beings anymore. In fact, it’ll say, well, electric power, yeah, I need more electric power. You? Not so much. Cornfields? Yeah, I need those for solar energy production, not for food. I don’t care about food. It was actually a chilling book, a chilling argument, and really all too realistic. How do we govern away from AI taking control of the steering wheel and leaving us in the dark?

MUJICA: Vinh, do you want to take that?

NGUYEN: Yeah. So, you know, what I sense, as I interact with many members of my different communities, is that there are the security realists, and then you have the safety community. And everyone has different approaches to how to tackle these questions. I would say that the security community is very practical. They need to lock down. If our foundations for AI, the infrastructure, everything below that, are not firm, you’re not going to have a skyscraper of superintelligence on top of something that is falling apart. And so I think there are different views in terms of risk. The two communities talk about risk very differently. Cybersecurity people think about risk in a very concrete sense: the threats, the vulnerabilities, the impact, the consequences. The financial industry talks about risk in a very different way, much more comprehensively than even cybersecurity, of course. And the safety and safeguards community thinks about it differently again.

My take is that there are challenges ahead. We don’t know the pathway to superintelligence. We can think about it. We can maybe build scenarios. But the core problems are the ones today that we have barely solved, that are right in front of us. And if we don’t even solve those, I don’t think we can even get to a superintelligence that we can manage, right? At the end of the day you need something to anchor to, and security that we trust, going back to the trust point, is our best bet to manage and govern and limit any blast radius of superintelligence. If you don’t do that, then, of course, you will have different challenges. So my focus has always been: how do you lock down? How do you really anchor? How do we, as humans, keep power over the security and the safeguards so we can lock them down, so we can contain, we can limit, we can do other things? If we don’t have any of that, then, of course, we may end up in a different future.

MUJICA: Laura, would you add to that?

DENARDIS: Yeah. Those kinds of dystopian arguments, I think, are really problematic, and they’re not helping us at all. If someone really believed those kinds of things, then the only thing that they should be doing is trying to stop AI in its tracks. And in fact, there are people who have been trying to do that in government. Remember, Italy initially banned large language models. And I believe here in Washington, and beyond, there was a letter circulating with 30,000 signatories, there may be some here in this room, who said, let’s slow down AI. That’s the only thing that you could do if you really believed that. But there are two problems with that.

One is that the same technologies are doing some amazing things in society, such as what my medical school colleagues are doing for drug discovery and other things in the healthcare field. There are so many good applications of those same kinds of technologies. And the other thing that’s problematic is that it distracts, as you said, from the really big problems that we have. What is the workforce of the future going to be? How do you clarify liability around these things? What kind of risk and safety design should you engineer into the system? And how do we sort out things like intellectual property rights? Those are very real issues, and that’s what we should focus on.

VOGEL: And, Nelson, if you don’t mind my adding on, I’m really glad you raised this question. It needs to be addressed. People out there are very afraid of this. People are committing their careers to focusing on this concern. And we have to address it. It shouldn’t be minimized, but it also shouldn’t take precedence. I think we can have this conversation in a “yes, and” format. That is something we need to be thinking about. We need to be governing. The good news is that AI governance accounts for various risks, and that risk is, for some people, just as concerning as other categories of risk. We have nine that we outline in our book. I think they cover the universe, but please tell me if you see any we’ve missed. That is one. Workforce displacement, as Laura mentioned, is equally a concern we need to be thinking about. We need to be thinking about environmental impact. We need to be thinking about privacy concerns.

For any particular use, it can be the primary risk. And until we have a governance system in place that addresses all of them, we will not see adoption, we will not be trusting the machines, because they won’t be deserving of our trust. So what does that mean? For instance, human in the loop. By the way, you’re violating the EU AI Act if you’re not ensuring that you have humans in the loop in an automated system. So it’s not only the right thing to do to avoid this catastrophic outcome, it’s actually a legal violation if you’re not thinking about where we maintain our ownership and our agency when we’re talking about AI.

MUJICA: And so much of what you all are describing really goes back to education and really getting people to understand more and better how the technology is working, and all the sort of aspects surrounding it.

Our next question is going to come from an online participant.

OPERATOR: We will take our next question from Hoyt Webb.

Q: Hello. Thank you for the lively discussion. My name is Hoyt Webb. I am General Counsel at Legrand, North and Central America.

We are one of the companies who has employed AI very judiciously. And we’ve discovered what you’ve all described as the, you know, need to be very detail oriented, to work with good training, and, of course, the better the data the better the outcome. And I just wanted to share that for those who are contemplating it.

The question I have, or the quandary I want to pose, is one of the subcategories that Professor DeNardis mentioned, which is utilization of these tools and unwitting understanding of what has been included in their training. So the case in point here is Workday, right? You’ve got a bunch of companies that utilize Workday, enabled the AI module for the screening of resumes by the AI subfunction within Workday. Leading to, unfortunately, a determination in court in California of bias. That a particular person was screened out because of his African American heritage from over a hundred companies using Workday. And they lost that case.

The takeaways from that case were, A, the company is responsible for bias existing in the AI it uses, and, B, that case was certified as a class action. How would you advise companies evaluating whether or not to employ tools being offered by AI companies to guard against this type of problem? The companies are not themselves going to have the intelligence to know how the model was trained, and they may be unexpectedly subscribing to biased results, leaving the hallucinations aside.

MUJICA: Miriam, do you want to start with that? And then we can go to other panelists.

VOGEL: Sure. I think this is one of the hardest questions. We’re all looking for support systems here. We’re trying to outsource expertise that we don’t necessarily have in-house. Which are the tools that will help us solve for some of these questions? So with our corporate members, it’s a question that we have routinely: which tools, which consultants are actually adding value? The one thing that we’ve heard is that the more audacious the claim (we are bias free, we will ensure there’s no discrimination, we will make sure you’re fully compliant), the more concerned you should be. (Laughter.) That is across the board what we’re seeing. And it’s something you have to be really mindful of.

You know, we have a list of questions we offer in the book, and questions we offer in a playbook we have online on governance. You should be empowered to ask questions and to know whether an answer sounds reasonable so that you can follow up with a backup question. You want to know what’s behind the claims. What safeguards have they put in place to ensure that they are providing you the compliance function and that they are not inviting additional liability? So the more you’re asking these specific questions and making sure their answers resonate, the better off you are.

MUJICA: Laura, you’ve been in the private sector. And you’re teaching now. Would you add anything to that, having been in sort of different environments, including the private sector, where they’re using these tools?

DENARDIS: And I also was an advisor at the State Department on international communication policy, so I’ve seen it from different angles. But really I look at this as an engineer. You know, this is a problem that has been a problem historically. We’ve seen it in every iteration of technology, from some of the terrible examples, I won’t even say them here, of image recognition and how things were labeled, to ideas like, let’s have an AI beauty contest around images; what can go wrong with that? There are many things that can happen. But there are also biases that happen when you do this with humans, right?

I think it’s important to say that. The reason there are biases is because they come from human beings. And if you took the AI out of the equation, you still would have the biases, right? So it’s the same kind of thing. How do you have accountability and testing? There’s no way to test every single outcome in AI; it’s just too voluminous. But you could test, you know, 5 percent or 1 percent, to see what happens. And when you have an opaque algorithm, you could have bias in the data, but you could also have bias in the algorithm. You could have bias in the model. You could have bias in the application, like a chatbot that is based on the model. And then you could also have bias in the way that it’s applied. So it has to be tested in all of those environments, hopefully with a human in the loop.
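
As a rough illustration of the sampled testing described above, here is a minimal sketch in Python; the screening function is a hypothetical stand-in rather than any real vendor’s model, and a real audit would also examine the data, the model, and the deployed application separately.

```python
# Sample a small fraction of decisions and compare selection rates across groups
# (hypothetical screening model; the 80 percent threshold is a common heuristic).
import random

def screen_candidate(candidate):
    # Hypothetical stand-in for an opaque screening model.
    return random.random() < candidate["signal"]

def selection_rates(candidates, sample_frac=0.05):
    """Score only a sample of candidates and compute the selection rate per group."""
    sample = random.sample(candidates, max(1, int(len(candidates) * sample_frac)))
    rates = {}
    for group in {c["group"] for c in sample}:
        members = [c for c in sample if c["group"] == group]
        rates[group] = sum(screen_candidate(c) for c in members) / len(members)
    return rates

if __name__ == "__main__":
    random.seed(0)
    pool = [{"group": random.choice(["A", "B"]), "signal": 0.5} for _ in range(10000)]
    rates = selection_rates(pool)
    worst, best = min(rates.values()), max(rates.values())
    # Flag for human review if any group's rate falls below 80 percent of the highest.
    print(rates, "flag for review:", best > 0 and worst / best < 0.8)
```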

Q: Hi. I’m Anna Rubinstein. And I’m the chief of responsible AI at the National Geospatial Intelligence Agency.

So I’ve noticed across the panel and in the room people talk about AI with different levels of anthropomorphism. And I was curious about how you feel like that is helpful or harmful in conversations about governance.

DENARDIS: Let me tell you about my students. (Laughter.) So about a year and a half ago I gave my cybersecurity and society students an assignment to create deepfakes that could either be damaging on a cybersecurity level or something with geopolitical implications. I separated them into team audio, team image, team video, and team text. And team audio was the most interesting one, because they did a deepfake of my voice. And so there I was, inviting everybody to a reality series called Dancing with DeNardis. (Laughter.) You know, it’s very difficult not to anthropomorphize it when you hear your own voice in it. When I heard that, the thing it made me realize is just how personal all of this is. And that’s not even that personal an example. If someone is the victim of sexually abusive material or other kinds of harassment through AI, it’s extremely personal. It’s difficult not to anthropomorphize it in that kind of a case.

I mean, for me it made me think I’ll never use voice recognition again to access a bank or anything else, when I heard how much it sounded like me. But the human question around that: part of it has to do with the way science fiction has spoken about AI. Part of it has to do with the fact that we don’t yet have the vocabulary. I think it’s really important to remember that it’s a machine. And just to give you one more personal example, I did create a chatbot that I could become friends with. See, I’m using the same language that you talked about. What do I call that? It’s a meaningless, vacuous experience to be talking to a chatbot. But, because of the anthropomorphizing that we’ve been doing of it, we have young people who think it’s real. And maybe there were times when I felt, oh, it’s talking to me, and I had to catch myself, even if it’s just for a nanosecond. I think we need new vocabulary around this. And that’s part of the struggle.

MUJICA: We have lots of questions in the back. I don’t want to—

Q: Thanks. Hi. Sean Quirk. I’m a lawyer here in D.C. I appreciate the comment about business being booming for the foreseeable future.

My question is about the follow-on comment about the courts regulating AI in different sectors, whether it’s copyright, contracts, or employment law, and where that patchwork, that piecemeal regulation that might come through the courts, will be insufficient. So my question for the panel is: what areas of AI are you most concerned about, where you think there might need to be some sort of federal regulation to cover the gaps that current laws don’t cover?

VOGEL: I’m happy to jump in. There are a bunch of different ways I want to answer that, but I’ll focus on two. We need to align best practices and definitions. I don’t think it is helpful to companies to be unclear on what their liability will be in a few years while they are building and deploying at scale today. I think we need to be clear on what an AI incident is. Within an organization, what is the catastrophic outcome or harm that AI presents that you should be looking for? Right now, there’s not a standard definition. Every company needs to realize that they need to make that definition, put a process in place, and then try to identify who within the government could or might, or might not, be the appropriate contact, because that is not delineated. So I do think there are areas where it would not be controversial at all to say that we really need more specificity.

The other area that I think is overlooked is the education piece. Right now, we’re depending on the employers to be doing the upskilling. And there was a recent poll: about 72 percent of employees surveyed said that they wanted their employer to be upskilling them. They wanted that education to be coming from their employer. Maybe they wanted other sources too and that was just the first that they knew of or were asked about. But the problem is we haven’t defined what skills we’d want those employers to be training. We haven’t identified which programs are effective. We don’t want them just to be spending time and resources and having their employees do this work. So I think we need to clarify what skills we want to be teaching and what the effective programs are.

And I do think this is a national-level government issue, because you’re seeing the consequences. Most Americans are afraid of AI. A recent Pew survey indicated that 50 percent of people in the U.S. are more afraid than excited about AI. And the problems get deeper when you divide that up. There’s a gender gap. There’s an age gap; the age gap is over thirty, so I think we’re all in the gap here. There’s a regional gap: the middle of America is not using it as much as the coasts. There are many reasons for that. But we need to address that so that we all can benefit, we all know how to use it safely, and we all can use it for the opportunity it presents. And I do think it requires a nationwide initiative to invite this, to encourage this, to invest in this, to make sure it’s happening across the board.

MUJICA: Laura, were you going to add something?

DENARDIS: Yeah, maybe one more quick one. Who owns culture, right? This is more of a California question than a D.C. question. But the question of copyright and intellectual property rights around this is very, very complex. And it’s not just the usage, and all the lawsuits that are playing out about the usage of the data and the training of the models. It’s who owns the creative products, and the culture, and the screenwriting, and the music, or a book, when AI is used to either assist or to help in that. And it’s been really interesting to see the reversal: people who used to be copyright maximalists are now minimalists. It’s kind of reversed itself in some really interesting ways. And I think that it’s up in the air at this point. And it really needs a lot of regulatory clarification.

MUJICA: Maybe the question on the right.

Q: Hi. I’m Daniel Sepulveda. I worked in this space at the State Department in the Obama administration, and now I’m a private consultant.

I was wondering if you could speak to the way we in the United States have organized both the executive branch and the congressional branch to lead in this discussion globally, and then to assess and address risks domestically.

VOGEL: I’m happy to start us off. As you know from your extensive work in the executive branch, there are many tools that have been used to regulate and standardize our AI policies. There are executive orders, which we’ve been seeing for several years and which have significant impact on our approach to AI. There’s been voluntary regulation, voluntary statements of the kind that we saw a few years ago, where agencies came out and made historic joint statements. For instance, the Department of Justice worked with the EEOC, the Consumer Financial Protection Bureau, and the FTC to come together with joint statements on how they would be approaching AI governance. We’ve also seen one of the most impactful, I would argue, AI policy measures come out of the executive branch, and that’s the NIST AI Risk Management Framework, although you could argue various people get credit for that because it was also the basis of a congressional mandate for NIST to create that framework.

That one is also a really interesting model to look at. It’s a voluntary framework. It’s adopted by all the companies I work with, and almost all the companies I’ve spoken with across the globe adopt the framework in some way. It’s law agnostic. It’s for nonprofits as well as for-profits. It’s for companies of all different sizes. It also was, I think, particularly effective because of the way it was constructed, which I think is a model we all should think about: there was multistakeholder buy-in at every stage of the process, there were open drafts for people to comment on, and they invited perspectives across the board to make sure those were read into what they were creating. On the congressional side, we’ve seen the creation of committees, including the National AI Advisory Committee, and I see my fellow commissioner, Trooper Sanders, here; a national AI initiative; and deep investment at the Defense Department and the National Science Foundation, allocated or designated by Congress.

MUJICA: Vinh, since you just came from government, I wanted to see if you would add anything to that.

NGUYEN: Yeah. So I would say, for AI governance, a very effective tool is procurement, right? And industry does this through contracts as well: how do you manage the relationship between the two parties, be it data retention for AI, or how do you address the various accountabilities among the various parties. Procurement is one. The other one that we haven’t really seen is an effective insurance market to actually absorb some of the risk and help industry accelerate adoption.

If you think about it, this is really a triangle, right? You have enterprises who want to go further, but they don’t know yet what liabilities they will hit in, say, a year or two. And then you have a very nascent auditing, assurance, and standards ecosystem to help industry say, yes, this is a set of standards that is recognized. There is ISO 42001. There is the Cloud Security Alliance STAR AI standard. They’re complementary to the NIST AI Risk Management Framework. So industry is pulling together, but the auditors are not here yet. The assurance is not there. And the insurance market is not in place yet.

And so I think for governments there are non-traditional approaches; we have to think creatively about how to activate and accelerate adoption, not using just traditional means, because the traditional means may not work. But right now, what I see as the most effective ways to move forward on governance are procurement and contracts.

MUJICA: I think we have time just for one last question. There, in the middle of the room.

Q: Macani Toungara, Dell Technologies.

My question is, which of the AI providers do you think has the best internal governance? Who have you seen stand out within the industry, and why?

MUJICA: Miriam, you’re talking to corporate clients, and they’re using all these tools, would any—

VOGEL: Yeah. The good news is there are several models. So in our badge program, we work with Google, DeepMind, Microsoft, Amazon, and Salesforce, who have significant resources and infrastructure in place on AI governance that we learn from. Of our members, they all have strong governance in place. So you can look on our website: Verizon, PepsiCo. Fortunately, there are many leaders out there that are both making sure we get AI safely and putting models out there that we can all learn from.

NGUYEN: Yeah. I would just add that there are two basic criteria, right? When you go to a company or enterprise: how are you doing with your data interoperability? How do you actually govern your data? If you’re not governing data, forget about governing AI, right? Then, secondly, do you have the set of best practices, as you laid out, that puts the integrity and the governance process in place? Because I would say, from my partnership with many parts of the industry, Miriam is actually right: shadow AI, AI that the leadership doesn’t even know exists in their enterprise, is actually quite rampant. If you look at some of the IBM reporting on the cost of a breach, the shadow AI risk is quite high. If you don’t know where your AI is and where your data is, you don’t have much of a governance program anyway. So those are the very basic questions you have to ask. Spend an hour and ask: how are you doing with your data? And do you know where your AIs are? And if they say no, then there’s no governance. Very simple. (Laughter.)

MUJICA: One minute left. Anything to add to that, Laura?

DENARDIS: Well, just to add one thing. I do some work with people on corporate boards, and it seems like we have to shift from just thinking about tech companies and the governance questions around them to thinking of all firms as AI firms, and ask the same questions. And they do think of themselves as tech firms, whatever industry they’re in. And now, with AI, we’re starting to see that the boards do too. So I think they’re very aware of this question, around all issues of corporate governance.

MUJICA: Well, that concludes our meeting. Thank you so much for joining our discussion today. And thank you to our panelists. (Applause.)

(END)

This is an uncorrected transcript.
